How to Keep MBSE from Becoming Just a Buzzword (or Is It Too Late?)

The term “Model-Based Systems Engineering,” or “MBSE,” has been around for nearly a decade. We see it in requests for proposals, marketing materials, social media, conferences, and many other places across the systems engineering community, and even in the general public. Clearly, MBSE has become an important part of systems engineering, but has it also become the very definition of a buzzword? First, take a look at the definition of a buzzword.

buzz·word

[buhz-wurd]

NOUN

  1. a word or phrase, often sounding authoritative or technical, that is a vogue term in a particular profession, field of study, popular culture, etc.

Source: Dictionary.com

So, it definitely sounds authoritative, as it comes from the International Council on Systems Engineering (INCOSE). It sounds technical, combining “Model-Based” and “Systems Engineering.” And clearly, it’s “in vogue,” given that it appears everywhere.

 

What the definition of a buzzword doesn’t capture is the negative connotation the term often carries, or, as Dilbert put it:

[Dilbert comic strip on buzzwords]

 

In other words, a buzzword is a term used by people who don’t really know what it means, and we have all heard MBSE used exactly that way. So, what does MBSE really mean?

 

Well, to understand its real meaning, we need to review the definition of MBSE from INCOSE:

 “Model-based systems engineering (MBSE) is the formalized application of modeling to support system requirements, design, analysis, verification and validation, beginning in the conceptual design phase and continuing throughout development and later life cycle phases.” – INCOSE

 

As systems engineers, the first thing we want to do is decompose this rather long sentence. It can be broken down into two parts:

  • Modeling (formalized application); and
  • Lifecycle (system requirements, design, analysis, verification and validation).

 

The formalized application of modeling means that we create models of the system using a standard. There are a number of formal and informal standards, which are applied in many different ways. The standard most systems engineers are familiar with is SysML, which, as a profile of UML, focuses primarily on communicating with the software community. The Lifecycle Modeling Language (LML) open standard (www.lifecyclemodeling.org) covers the second part of the definition better, as its name implies. It also addresses the program management aspects of systems engineering (risk, cost, schedule, etc.), none of which is really addressed by SysML.

 

But we have been creating drawings, which are a type of model, since well before anyone called the discipline systems engineering. So, what makes MBSE different from classic systems engineering?

 

The key difference is the type of modeling we mean when we talk about MBSE: the development of “computable models.” Computable models are models based on data (usually in a standard ontology, like the one LML provides) that can be visualized in standard ways (again using any drawing standard, which both SysML and LML provide). These models can also be tested to determine their validity and to make sure we don’t introduce errors in logic or problems related to dynamic constraints (e.g., lack of resources, bandwidth limits, latencies). This testing also includes checking the models against general rules of quality, such as “all function names should start with a verb.” The tools for this kind of testing today include simulation (e.g., discrete event, Monte Carlo) and natural language processing (NLP).
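To make “computable” concrete, here is a deliberately tiny sketch in plain Python. The entity names and the verb list are invented for illustration (this is not Innoslate’s or LML’s actual API); it simply stores a few Action-like entities as data and applies the “function names should start with a verb” heuristic mentioned above.

```python
# A minimal sketch of a "computable model": entities held as data and checked
# automatically against a quality heuristic. All names are illustrative only;
# they are not part of any tool's real schema or API.

actions = [
    {"name": "Receive Order", "duration_s": 30},
    {"name": "Order Validation", "duration_s": 45},   # violates the heuristic
    {"name": "Ship Package", "duration_s": 120},
]

# Tiny stand-in for an NLP check; a real tool would use part-of-speech tagging
# instead of a hard-coded verb list.
KNOWN_VERBS = {"receive", "validate", "ship", "send", "process", "store"}

def starts_with_verb(name: str) -> bool:
    """Heuristic: all function (Action) names should start with a verb."""
    return name.split()[0].lower() in KNOWN_VERBS

for action in actions:
    if not starts_with_verb(action["name"]):
        print(f"Quality warning: '{action['name']}' does not start with a verb")
```

A drawing of these same three boxes could not be checked this way; because the model is data, the rule can be run over the whole project automatically.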

 

Having models that can be tested, and actually testing them, is a clear way to make MBSE real and not a buzzword. Therefore, to implement MBSE you need a tool, or set of tools, that can conduct this testing.

 

When considering an “MBSE” tool, you will hear almost every tool vendor claim that their product is one. To distinguish between those who deliver on the promise of MBSE and those who are treating it as a buzzword, just ask the following questions:

 

  1. Are your diagrams essentially drawings, or are they automatically generated from the data?
  2. If I make a change to one piece of data in the database, is that change automatically reflected in all the other visualizations of that piece of data, including the diagrams?
  3. Can I execute the models using robust simulation techniques?
  4. Do those simulation techniques include discrete event and Monte Carlo?
  5. Do the simulations take into account resource, latency, and bandwidth constraints? (See the small sketch after this list.)
  6. Does your tool test the entire model against common standards of good practice (heuristics)?
  7. Does your tool support the entire lifecycle (system requirements, design, analysis, verification, and validation) in a seamless, integrated fashion?
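As a minimal illustration of questions 3 through 5, the sketch below (plain Python, with invented task names; it is not any particular tool’s simulation engine) runs a handful of timed tasks against a single shared resource. The contention delays the later tasks, which is exactly the kind of behavior a static drawing cannot reveal; a Monte Carlo run would simply repeat the loop many times with service times drawn from distributions.

```python
# A toy single-resource, discrete-event-style calculation (illustrative only).
# Tasks arrive at given times and each needs the one shared "server" for its
# full service time, so later arrivals may have to wait.

tasks = [
    # (arrival_time_s, task_name, service_time_s)
    (0.0, "Process telemetry frame", 5.0),
    (1.0, "Run health check", 4.0),
    (2.0, "Downlink status report", 3.0),
]

server_free_at = 0.0                      # when the shared resource frees up
for arrival, name, service in sorted(tasks):
    start = max(arrival, server_free_at)  # wait if the resource is busy
    finish = start + service
    server_free_at = finish
    print(f"{name}: arrived {arrival:.1f}s, started {start:.1f}s "
          f"(waited {start - arrival:.1f}s), finished {finish:.1f}s")
```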

 

If you ask all these questions, you will find only a limited set of tools that even come close to keeping MBSE from being just a buzzword. So, it’s essential that you carefully evaluate these tools to make sure they provide the support you need to become more productive and produce higher-quality products. To see a tool that does meet all these needs, check out www.innoslate.com.

Quick Guide to Innoslate’s Ontology

Innoslate uses the Lifecycle Modeling Language (LML) ontology as the basis for the tool’s database schema. For those new to the word “ontology,” it’s simply the set of classes, and the relationships between them, that forms the basis for capturing the needed information. We look at this in a simple Entity-Relationship-Attribute (ERA) form. This formulation has a simple parallel to the way we look at most languages: entities represent nouns; relationships represent verbs; attributes on entities represent adjectives; and attributes on relationships represent adverbs.
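To picture the noun/verb/adjective/adverb parallel, here is a rough sketch in plain Python. The class and entity names are invented for illustration and do not reflect Innoslate’s actual schema.

```python
from dataclasses import dataclass, field

# Illustrative ERA structures only; not Innoslate's real data model.

@dataclass
class Entity:                    # "noun"
    name: str
    attributes: dict = field(default_factory=dict)          # "adjectives"

@dataclass
class Relationship:              # "verb"
    source: Entity
    verb: str
    target: Entity
    attributes: dict = field(default_factory=dict)          # "adverbs"

pump = Entity("Pump Assembly", {"mass_kg": 12.5})            # noun + adjective
move_fluid = Entity("Move Fluid")
performs = Relationship(pump, "performs", move_fluid,
                        {"allocated_baseline": "2024"})      # verb + adverb

print(f"{performs.source.name} {performs.verb} {performs.target.name}")
```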

LML contains twelve (12) entity classes and eight (8) subclasses. They represent the basic elements of information needed to describe almost any system. The figure below shows how they can be grouped to create the models needed for this description.

Most of these entity classes have various ways to visualize the information, commonly called models or diagrams. The benefit of producing the visualizations from this ontology is that when you create one model, other models that use the same information automatically have that information available.

All these entities are linked to one another through the relationships. The primary relationships are shown below.

 

This language takes a little getting used to, like any other language. For example, you might be used to referring to something functional as a Function or an Activity. These are both “types” of Actions in LML, implemented as labels in Innoslate. Similarly, you may be used to using different relationship names for parents and children in different entity classes. However, by using the same verbs for parent-child relationships across all entity classes, you avoid the confusion of having to remember all the different verbs.

You still might need other ontological additions. LML was meant to be the “80% solution.” Look very closely at the ontology first, as often you only need to add types (labels) or an attribute here and there. Hopefully, you will rarely need to add new classes and relationships. If you do add new classes, try to do so as subclasses of existing ones, so that you inherit the diagrams as well. For example, when the Innoslate development team added the new Test Center, they decided they needed to extend the Action class. This enables the TestCase class to inherit the Action class’s functional diagrams, as well as the status, duration, and other attributes that were important.
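A rough analogy in code shows why subclassing is the attractive way to extend: the new class inherits the parent’s attributes, and anything written against the parent keeps working. The classes below are invented for illustration and are not how Innoslate actually implements its schema.

```python
from dataclasses import dataclass

# Invented classes for illustration only.

@dataclass
class Action:                          # an existing LML-style class
    name: str
    status: str = "planned"
    duration_s: float = 0.0

@dataclass
class TestCase(Action):                # new class added as a subclass of Action
    expected_result: str = ""

def render_action_view(actions):
    """Stand-in for a diagram or view generator written against Action."""
    for a in actions:
        print(f"[{a.status}] {a.name} ({a.duration_s}s)")

# Because TestCase inherits from Action, it appears in Action-based views and
# carries the status and duration attributes with no extra work.
render_action_view([
    Action("Initialize system", "complete", 5.0),
    TestCase("Verify startup time", "planned", 5.0, expected_result="< 10 s"),
])
```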

Hopefully, you can see the benefits of using LML as the basis for Innoslate’s schema. It was designed to be:

  • Broad (covering the entire lifecycle – technical and programmatic)
  • Ontology-based (enabling translation from LML to other languages and back)
  • As capable as SysML (with the LML v1.1 extensions) and DoDAF
  • Simple in structure
  • Useful to stakeholders across the entire lifecycle

For more information, see www.lifecyclemodeling.org and visit the Help Center at help.innoslate.com.

Great Read: “Enhancing MBSE with LML”

Review of “Enhancing Model-Based Systems Engineering with the Lifecycle Modeling Language” by Dr. Warren Vaneman

Dr. Vaneman’s paper, “Enhancing Model-Based Systems Engineering with the Lifecycle Modeling Language,” provides a compelling justification for the need for a simpler, yet more complete, language that integrates systems engineering with program management to support the entire system lifecycle. It shows that the current LML standard, version 1.1, includes all the key features of the Systems Modeling Language (SysML) and thus can be used by systems engineering practitioners to generate the complete SysML diagram set.

This paper expresses the key goals of LML: “1) to be easy to understand; 2) to be easy to extend; 3) to support both the functional and object-oriented (O-O) approaches within the same design; 4) to be a language that can be understood by most system stakeholders, not just systems engineers; 5) to support the entire system’s lifecycle – cradle to grave; and 6) to support both evolutionary and revolutionary system changes to system plans and designs over the lifespan of the system.”

Dr. Vaneman covers three themes in the rest of the paper: 1) an overview of legacy modeling and an introduction to LML; 2) a comparison of SysML and LML using eight MBSE effectiveness measures; and 3) the potential to use LML as an ontology for SysML. Of particular interest was the comparison of SysML to LML. The major problem with SysML is the lack of an ontology, which makes it less expressive and precise. SysML seems to have problems in usability as well, due to the complexity of its diagramming notations.

Although a preliminary mapping of LML to SysML was done as part of the first release of the standard, version 1.1 only had to be slightly modified to more fully visualize all the SysML diagrams. Only two new entity classes were defined: Equation and Port. Equation was developed to support the Parametric Diagram, which diagrams equations, and Port, a subclass of Asset, was essential for a couple of the physical modeling diagrams.

I heartily agree with Dr. Vaneman’s conclusion that “LML provides a means to improve how we model system functionality to ensure functions are embedded in the design at the proper points and captured as part of the functional and physical requirements needed for design and test.”

You can read Warren Vaneman’s paper here:

IEEE MBSE Paper- Vaneman

More resources:

  • LML website
  • Quick Guide to Innoslate’s Ontology

 

10 Qualities That Make a Good Systems Engineer

All systems engineers should have an understanding of basic concepts and a strong technical background, but these qualities go beyond just the necessities. From 40+ years of experience, I have found that a good systems engineer must have the following 10 qualities.

#1 Patience and Perseverance

To create a complicated system, an engineer must have a lot of patience and perseverance. The more complex the system, the longer and more tedious the project becomes. An engineer cannot figure out everything at once. It takes time to see the big picture and to track down all the small details. You will test and test and still find errors. You have to have the patience to know that it takes time, and the determination to keep going after hundreds of failed attempts.

#2 Ability to Know When You Are Done

A good systems engineer wants their project to be flawless, but often it’s too easy to fall into a perfectionist trap. You tell yourself, “One more change and it will be perfect.” However, doing this may mean you never complete your project and all that hard work will become obsolete. The best engineers know when their system is good enough and when the system needs a little more re-engineering.

#3 An Analytical Brain

Most engineers are naturally analytical, which is probably why they were attracted to the field in the first place. From the moment they could talk, they were the ones who continually asked questions and analyzed the world around them. A good systems engineer goes one step further than analysis, looking for solutions to the problems and questions they uncover.

#4 Knowledge of Systems Engineering Software Tool(s)

In this day and age, all systems engineers should have some experience with tools. Most colleges, especially at the graduate level, use systems engineering software tools. These tools allow you to create complex systems. They help you organize your information and develop documentation and reports at a much quicker pace and with higher accuracy. They can also help you analyze your information better: even if you are already a pro at analyzing, a tool can help you organize the information in a way that makes analysis faster and easier. Tools such as Innoslate® can make you a better systems engineer.

#5 Strong Organizational Skills

You need organizational skills in order to handle the amount of information that a systems engineer deals with on a regular basis. It is important to organize well, so you are able to track status and history accurately and create documents and reports that are understandable. Although a tool can greatly improve the way you organize, you still need to understand organizational concepts.

#6 Ability to See the Small Picture

One of the greatest qualities a systems engineer can have is being detail oriented. You should be able to look at the small picture and see that all the details are thoroughly reviewed and that no errors slip through. You need to be a detail-oriented type of person. Much of what we do is planning. Just as an event planner has to make sure all the details are just right for the ultimate goal (the event) to be a success, so does a systems engineer.

#7 Ability to See the Big Picture

The overall system needs to be looked at just as much as the small details that make up the system. You need to make sure that the goal of the entire system is kept in mind throughout the planning. A good systems engineer needs to be able to determine future needs as well. They must have vision (I talk about this in my upcoming book on LML) and be detail oriented, but still be able to see the big picture.

#8 Well Rounded Background

A bad systems engineer knows systems engineering concepts and definitions like the back of his hand, but knows nothing else. A good systems engineer tries to be knowledgeable in other subjects related to the field. A great systems engineer understands the importance of being well-rounded. A well-rounded background will help a systems engineer analyze and find potential issues better than anyone else.

#9 Communication Skills

Unfortunately, English is not a high priority at many engineering colleges. Systems engineers need to communicate well, and they need to be able to communicate with non-engineers. Communication skills take time and practice to develop. If you are a systems engineer and you know that communication is not one of your strong skills, make the effort to improve.

#10 Ability to Lead, Follow and Work Well in a Team

At some point in your career you will have led, followed, and worked in a team. The best systems engineers know how to do all three well. A good leader knows how to follow and work together with others. A leader understands what his or her team needs to know and understand. The inability to do all three can be detrimental to a project. Systems engineers, more often than not, do extremely important work and need a good leader and a good team to follow.

 

It takes a lot of time to develop all these qualities. I know I did not have all of them when I began my career. Don’t let this discourage you, but make it a goal to obtain each one of these qualities.

If you think you have these qualities, join our team.

Reposted from SPEC Innovations with permission.

 

Document Trees in Innoslate

Different levels of documents result from decomposition of user needs to component-level specifications, as shown in the figure below.

Innoslate enables the user to create such trees as a Hierarchy chart, which uses the “decomposed by” relationship to show the hierarchy. An example is shown below.

Each of these Artifacts contains requirements at the different levels. Those requirements may be related to one another using the “relates from/relates to” relationship if they are peer-to-peer (i.e., at the same level of decomposition), or using the “decomposed by” relationship to indicate that they were derived from a higher-level requirement.
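A minimal way to picture these links is the sketch below in plain Python. The requirement numbers and text are invented, and the dictionaries only mimic the “decomposed by” and “relates to” relationships; this is not Innoslate’s data model.

```python
# Illustrative requirement tree: "decomposed by" links parents to derived
# children; "relates to" links peers. All identifiers and text are invented.

requirements = {
    "ERD.1":  "The enterprise shall deliver packages within 24 hours.",
    "MN.1.1": "The mission shall support same-day regional delivery.",
    "SRD.5":  "The system shall compute a delivery route in under 5 seconds.",
}

decomposed_by = {                 # parent -> derived (child) requirements
    "ERD.1": ["MN.1.1"],
    "MN.1.1": ["SRD.5"],
}
relates_to = [("MN.1.1", "MN.1.2")]   # peer-to-peer link (MN.1.2 not listed)

def print_tree(req_id: str, depth: int = 0) -> None:
    text = requirements.get(req_id, "(defined in another document)")
    print("  " * depth + f"{req_id}: {text}")
    for child in decomposed_by.get(req_id, []):
        print_tree(child, depth + 1)

print_tree("ERD.1")
```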

This approach allows you to reuse, rather than recreate requirements from a higher-level document. An example is shown in Requirements View below.

 

In this example, the top-level Enterprise Requirements were repurposed for the Mission Needs document (MN.1.1 and MN.1.2) and the System Requirements Document (SRD.5). If you prefer to keep the original numbers, you only have to Auto Number the ERD document using that button on the menu bar, and the objects will show up with the ERD prefix in the lower documents. Note that in either case the uploaded original document retains the original numbers, in case you want to reference them that way. Also, each entity has a Universally Unique Identifier (UUID) that the requirement retains, if you prefer to use that as a reference.

The approach discussed above is only one way to develop a document tree. Innoslate enables other approaches, such as using a different relationship (e.g., “derived from/derives”). Try it the way described above and see if it meets your needs. If not, adjust as you like.

Do Frameworks Really Work?

Over the past 20 years, we have been using various “Frameworks” to capture information about architectures and systems, beginning perhaps with the Zachman Framework, which is still one of the most successful. The Zachman Framework is timeless for this reason: it forced us to think about the standard interrogatives (Who, What, When, Why, Where, and How) and different perspectives (Executive, Business Management, Architect, Engineer, and Technician) as applied to the enterprise. The idea was to build models in each area, starting with simple lists and working up to detailed models of the systems at the component level.

Other frameworks followed, including the DoD Architecture Framework (DoDAF, which came from the C4ISR Architecture Framework), the Ministry of Defence Architecture Framework (MODAF), the NATO Architecture Framework, and many others. All these Frameworks were built on the idea that the information could be easily “binned” into these boxes, but as we know, many of the models in these Frameworks include maps between different data elements, such as the CV-6 from DoDAF, which maps Capabilities to Operational Activities. Recently, the Object Management Group (OMG) and others have been trying to push out a Unified Architecture Framework (UAF) that combines the models from many of these other frameworks into a single, very large framework. OMG is also responsible for developing the Unified Modeling Language (UML), the Systems Modeling Language (SysML), and the Unified Profile for DoDAF and MODAF (UPDM).

All of these frameworks and languages are ways to capture information about a system or enterprise and show it as a set of data, often as a picture or model with two or more pieces of information in it. An example is the DoDAF OV-5a, a functional hierarchy of Operational Activities, which contains only a single entity class and the decomposition relationship between those entities. Another example is the OV-5b, which shows the Operational Activities and the “Resource Flows” (which many of us recognize as inputs and outputs to/from the Operational Activities) between them. Thus, we now have two entity classes and the associated relationships between them. Obviously, the information in the CV-6, OV-5a, and OV-5b overlaps, in that the same Operational Activities need to show up in each of these models. But how many of these different models would we need to completely describe a system or enterprise?

An alternative way to capture information is to use an ontology (DoDAF 2.02 is itself based on a rather large ontology, the DoDAF MetaModel 2.0, or DM2) that captures the information in a finite number of classes (or bins) and a set of relationships between those classes. The classes and relationships can have attributes associated with them, which are also pieces of information. At the ontology level, all we see is a mass of data, so most of us still want to see the pictures; most architects and systems engineers seem to prefer working with the visualizations.

An alternative to the DM2 is the Lifecycle Modeling Language (LML), which contains both an ontology and a set of diagrams to capture the necessary technical and programmatic information associated with a project. This language uses an ontology that appears simpler than the DM2, but actually hides the complexity in the large number of relationships between entities and in the fact that the relationships can have attributes associated with them. LML purports to be the 80% solution to the ontology, meaning that you may decide to extend it to meet your specific needs, but let’s stick with the base language for now. LML has 20 classes of entities, and each entity class has a number of relationships associated with it (over 40 in total). So, if we ignore the attributes, we have 20(20-1)/2 = 190 possible pairings of information. Does that mean we need 190 or more diagrams to visualize all this information? Perhaps – it could be fewer, or it could be more. Can we really have that many different diagrams to represent this information, which is what a Framework would require? And of course, if we add in all the attributes on both the classes and relationships, then we are trying to display far more information than this.
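The pair count is just the number of ways to choose two classes out of twenty, which is easy to check:

```python
from math import comb

# 20 entity classes (LML's 12 classes plus 8 subclasses); ignoring attributes,
# the number of distinct class pairings is:
print(comb(20, 2))   # 190, i.e. 20 * (20 - 1) / 2
```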

So, Frameworks are a useful starting point, which is how John Zachman uses his framework, and that may be enough for enterprise architecture, but it’s not a panacea for all systems engineering problems. Sooner or later, as you decompose the system to a level where you can determine what to buy or build, and as you manage these kinds of projects, you will likely need a more robust approach. LML and this kind of ontology make it much easier to capture, manage, and visualize the information. See for yourself. Go to www.innoslate.com and try out the tool that uses LML for free. Explore the Schema Editor to see the entire ontology. Play with the “Getting Started” panel examples. I think once you do, you will find this approach works much better than the Frameworks. In addition, Innoslate has a “DoDAF Dashboard” that enables you to create DoDAF models directly from the dashboard, so if that is what you are more familiar with, you will find it the easiest way to get started. Notice that many of the other products are automatically populated with information from the models you have already built. That’s because Innoslate reuses that information to automatically create the other views!