Why Do We Need Model-Based Systems Engineering?

MBSE is one of the latest buzzwords to hit the development community.

The main idea was to transform the systems engineering approach from “document-centric” to “model-centric.” Hence, the systems engineer would develop models of the system instead of documents.

But why? What does that buy us? Switching to a model-based approach helps: 1) coordinate system design activities; 2) satisfy stakeholder requirements; and 3) provide a significant return on investment.

Coordinating System Design Activities

The job of a systems engineer is in part to lead the system design and development by working with the various design disciplines to optimize the design in terms of cost, schedule, and performance. The problem with letting each discipline design the system without coordination is shown in the comic.

If each discipline optimized the design for its own area of expertise, then the airplane (in this case) would never get off the ground. The systems engineer works with each discipline and balances the needs of each area.

MBSE can help this coordination by providing a way to capture all the information from the different disciplines and share it with the designers and other stakeholders. Modern MBSE tools, like Innoslate, provide the means for this sharing, as long as the tool is easy for everyone to use. A good MBSE tool will have:

  • an open ontology, such as the Lifecycle Modeling Language (LML);
  • many ways to visualize the information in different interactive diagrams (models);
  • the ability to verify that the logic and modeling rules are being met; and
  • traceability among all the information from all sources.

Satisfying Stakeholder Requirements

Another part of the systems engineer’s job is to work with the customers and end users who are paying for the product. They have “operational requirements” that must be satisfied so that they can meet their business needs; otherwise, they will no longer have a business.

We use MBSE tools to help us analyze those requirements and manage them to ensure they are met at the end of product development. In doing so, the systems engineer becomes the translator among the electrical engineers, the mechanical engineers, the computer scientists, the operators, the maintainers, and the buyers of the system. Each speaks a different language. The idea of using models is to provide this communication in a simple, graphical form.

We need to recognize that many types of systems engineering diagrams (models) do not communicate to everyone, particularly the stakeholders. That’s why documents contain both words and pictures: the words explain the visual image to those who do not already understand it. We need an ontology and a small set of diagrams that look familiar to almost anyone, so that we have something that can model the system and still communicate well with everyone.

Perhaps the most important thing about this combined functional and physical model is that it can be tested to ensure that it works. Using discrete event simulation, the model can be executed to generate timelines and to identify resource usage and cost. In other words, it allows us to optimize the cost, schedule, and performance of the system through the model. Finally, we have something that helps us do our primary job. Now that’s model-based systems engineering!
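As a rough illustration of the principle (this is a toy sketch, not Innoslate’s simulator, and every action name, duration, and cost below is invented), executing even a tiny functional model can produce a timeline and roll up total cost:

```python
# Toy execution of a functional model: each action has a duration (hours),
# a cost, and a list of predecessor actions. Names and numbers are invented.

actions = {
    "Define requirements": {"duration": 40, "cost": 6000,  "after": []},
    "Design subsystem A":  {"duration": 80, "cost": 12000, "after": ["Define requirements"]},
    "Design subsystem B":  {"duration": 60, "cost": 9000,  "after": ["Define requirements"]},
    "Integrate and test":  {"duration": 50, "cost": 8000,  "after": ["Design subsystem A", "Design subsystem B"]},
}

finish = {}
for name, data in actions.items():  # dict order lists predecessors first here
    start = max((finish[p] for p in data["after"]), default=0)
    finish[name] = start + data["duration"]
    print(f"{name}: start {start} h, finish {finish[name]} h")

print("Total schedule:", max(finish.values()), "hours")
print("Total cost:   ", sum(a["cost"] for a in actions.values()), "dollars")
```

A real MBSE simulator handles far more (branching logic, shared resources, Monte Carlo runs), but the principle is the same: the model itself, not a document, is what gets executed and measured.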

Providing a Significant Return on Investment

We can understand the idea of how systems engineering provides a return on investment from the graph.

The picture shows what happens when we do not spend enough time and money on systems engineering. The result is often cost overruns, schedule slips, reduced performance, and program cancellations. Something not shown on the graph, since it is NASA-related data for unmanned satellites, is the potential loss of life due to poor systems engineering.

MBSE tools help automate the systems engineering process by providing a mechanism not only to capture the necessary information more completely and traceably, but also to verify that the models work. If those tools contain simulators that execute the models and use that execution to optimize cost, schedule, and performance, then fewer errors will be introduced in the early requirements-development phase. Eliminating those errors prevents the cost overruns and other problems that a traditional document-centric approach might not surface until much later.

Another cost reduction comes from conducting model-based reviews (MBRs). An MBR uses the information within the tool to show reviewers what they need to ensure that the review evaluation criteria are met. The MBSE tool can provide a roadmap for the review using internal document views and links, and it can provide commenting capabilities so that reviewers’ questions can be posted. The developers can then use the tool to answer those comments directly. By not having to print copies of the documentation for every reviewer and then consolidate the markups into a document for adjudication, we cut out several time-consuming steps, which can reduce the labor cost of the review by an order of magnitude. This MBR approach can reduce the time to review and respond from weeks to days.

Bottom Line

The purpose of “model-based” systems engineering was to move away from being “document-centric.” MBSE is much more than just a buzzword. It is an important approach that allows us to develop, analyze, and test complex systems. Most importantly, we need MBSE because it provides a means to coordinate system design activities, satisfy stakeholder requirements, and deliver a significant return on investment. The “model-based” technique is only as good as the MBSE tool you use, so make sure to choose a good one.

Innoslate’s Ontology Webinar

Live Webinar July 6th at 2:30 pm EST

Everyone talks about “data-centricity,” but what does that mean in practical terms? It means that you have to have a well-defined ontology that can capture the information needed to describe the architecture or system you work with or want to create. An ontology is simply a taxonomy of entity classes (bins of information) and the relationships between those classes.
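As a small, hypothetical sketch of that idea (the class and relationship names below are LML-inspired but abbreviated, not the full specification), an ontology can be expressed as a set of entity classes plus the relationships allowed between them:

```python
# Simplified, illustrative ontology: a taxonomy of entity classes and the
# relationships permitted between them. Names are LML-inspired but abbreviated.

ENTITY_CLASSES = {"Action", "Asset", "Requirement", "Risk", "Decision"}

# (relationship name, source class, target class)
ALLOWED_RELATIONSHIPS = {
    ("performs",    "Asset",       "Action"),
    ("traced from", "Action",      "Requirement"),
    ("decomposes",  "Requirement", "Requirement"),
    ("mitigates",   "Action",      "Risk"),
}

def allowed(relationship, source_class, target_class):
    """Return True if the ontology permits this relationship between the two classes."""
    return (relationship, source_class, target_class) in ALLOWED_RELATIONSHIPS

print(allowed("performs", "Asset", "Action"))        # True
print(allowed("performs", "Requirement", "Action"))  # False: rejected by the ontology
```

A tool built on a well-defined ontology can reject relationships that make no sense, which is what keeps the captured data consistent and traceable.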

You’ll learn about a relatively new ontology, the Lifecycle Modeling Language (LML), which provides the basis for Innoslate’s database schema. In this webinar, we will discuss each entity class and why it was developed. Dr. Steven Dam, Secretary of the LML Steering Committee, will present the details of the language and how it relates to other ontologies and languages, such as the DoDAF MetaModel 2.0 and SysML. He will also discuss ways to visualize this information to enhance understanding and how to use that information to make decisions about the architecture or system.

Join us live on July 6th at 2:30 pm EST.

After July 6th 2017, watch the recording here.

Why Requirements Management Needs Analysis

When we talk to potential customers who are seeking requirements management tools, we begin by asking them: “What is your requirements management process?” Usually they say, “We get requirements from our customers and then trace them to our products.” In areas where the customer knows exactly what they need and has been doing it for many, many years, this might work. But in the fast-paced businesses of today, that approach will miss many of the real requirements, gaps, and risk areas. Ignorance of these requirements usually means product failure or, worse, loss of life. Your company is then liable for that failure, and in the US that usually means a lawsuit. If you can’t prove that you did the necessary analyses, your company will usually lose the lawsuit.

If they say they do analysis on those original customer requirements, we then ask: “How do you do requirements analysis?” Often the answer is: “Well, we read them and look for shall statements.” Again, this is insufficient and will lead to failure! A requirement must be (a simple checklist sketch follows the list):

  • Clear: unambiguous and not confusing.
  • Complete: expresses a whole idea.
  • Consistent: not in conflict with other requirements.
  • Correct: describes the user’s true intent and is legally possible.
  • Design: does not impose a specific solution on the design; says “what,” not “how.”
  • Feasible: able to be implemented with existing technology, and within cost and schedule.
  • Modular: able to be changed without excessive impact on other requirements.
  • Traceable: uniquely identified, and able to be tracked to predecessor and successor lifecycle items/objects.
  • Verifiable: provable (within realistic cost and schedule) that the system meets the requirement.
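One simple way to make these criteria actionable, sketched below purely for illustration (this is not a feature of any particular tool, and the attribute and function names are invented), is to track them as a checklist for each requirement:

```python
from dataclasses import dataclass, fields

# Illustrative checklist of the quality attributes listed above.
@dataclass
class RequirementQuality:
    clear: bool = False
    complete: bool = False
    consistent: bool = False
    correct: bool = False
    design_independent: bool = False   # the "Design" attribute: states "what," not "how"
    feasible: bool = False
    modular: bool = False
    traceable: bool = False
    verifiable: bool = False

    def failing(self):
        """Return the names of the attributes this requirement does not yet satisfy."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

r = RequirementQuality(clear=True, complete=True, verifiable=True)
print("Still needs work on:", r.failing())
```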

If they say they understand the need for complete requirements analysis, then we ask: “Did you know that functional analysis and modeling is a critical part of requirements analysis and management?” Again, most of the time the answer we receive is: “No, we don’t need that! All we need is a cheap tool that traces the requirements to the product.” As you might expect, this approach also leads to failure.

The Department of Defense (DoD) recognizes this problem. They define the purpose of requirements analysis as:

“To (1) obtain sets of logical solutions [functional architectures] derived from defined Technical Requirements, and (2) to better understand the relationships among them. Once these logical solution sets are formed, allocated performance parameters and constraints are set, and the set of Derived Technical Requirements for the system can be initially defined.

Outcomes: For each end product of a system model:

  1. A set of logical models showing relationships (e.g., behavioral, functional, temporal, data flows, etc.) among them
  2. A set of ‘design-to’ Derived Technical Requirements”

Most experts use a process like the one below:

Notice that functional analysis and allocation sits at the center of this process. Other process models include it as well, but they tend to bury it. That is a mistake. Functional analysis enables the analyst to create user stories (sometimes called use cases, scenarios, or threads) that can be used to validate the user requirements. As you conduct this analysis by creating a functional model, you can identify gaps and problem areas that the user may not have originally considered. You can also use these models to derive performance requirements and identify other constraints using simulation.
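As a small, hypothetical illustration of that gap-finding step (all names below are invented, and a real tool does this through its traceability relationships rather than a script), you can flag any requirement that no function in the model satisfies:

```python
# Illustrative gap check: find requirements that no function in the
# functional model traces to. All identifiers are invented.

requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

# function -> requirements it satisfies (the traceability links in the model)
functions = {
    "Detect hazard":  {"REQ-1"},
    "Alert operator": {"REQ-2"},
    "Log the event":  {"REQ-2", "REQ-3"},
}

covered = set().union(*functions.values())
gaps = requirements - covered
print("Requirements with no supporting function:", sorted(gaps))  # ['REQ-4']
```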

The figure below shows a functional model of a process for fighting forest fires. It includes data entities and resources used in the operation. The rounded rectangular blocks represent the functional requirements for the operation. Even though some of these functions may currently be performed by people, many of them can be automated to enhance the performance of the overall system. Many of these functional requirements would be missing if you just asked people what the requirements were.

We can better understand the performance requirements by executing this model in a simulation. An example of the results from a discrete event simulation is shown below:

 

These results show the time, cost, and resources (water) used to put out the fire. By running these kinds of simulations, we can easily determine the cost, schedule, and performance metrics needed to accomplish the operation. Isn’t it better to know that early in the design phase rather than later?
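For readers who want to experiment with the idea, here is a minimal sketch of how such an estimate could be computed; the steps, durations, rates, and water quantities below are invented placeholders, not the numbers behind the results above:

```python
# Toy estimate of the time, cost, and water used to fight a fire.
# Every number is an invented placeholder for illustration only.

steps = [
    # (name, duration in hours, cost per hour, water used in gallons)
    ("Detect and report fire", 1.0,  500,     0),
    ("Dispatch air tankers",   0.5, 2000,     0),
    ("Drop water on fire",     4.0, 6000, 48000),
    ("Mop up and monitor",     6.0,  800,  2000),
]

clock = cost = water = 0.0
for name, hours, rate, gallons in steps:
    clock += hours
    cost += hours * rate
    water += gallons
    print(f"t={clock:5.1f} h  {name}")

print(f"Total time:  {clock:.1f} hours")
print(f"Total cost:  ${cost:,.0f}")
print(f"Total water: {water:,.0f} gallons")
```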

The end result of performing these analyses is a much better set of requirements for building or buying what the users need. It also builds the case that you did your due diligence in this litigious society. That could mean the difference between a successful project and a failed one.